Least Squares Optimization with L1-Norm Regularization

Author

  • Mark Schmidt
Abstract

This project surveys and examines optimization approaches proposed for parameter estimation in Least Squares linear regression models with an L1 penalty on the regression coefficients. We first review linear regression and regularization, and both motivate and formalize this problem. We then give a detailed analysis of 8 of the varied approaches that have been proposed for optimizing this objective, 4 focusing on constrained formulations and 4 focusing on the unconstrained formulation. We then briefly survey closely related work on the orthogonal design case, approximate optimization, regularization parameter estimation, other loss functions, active application areas, and properties of L1 regularization. Illustrative implementations of each of these 8 methods are included with this document as a web resource.
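For reference, the objective in question, written here in standard notation rather than quoted from the report, is the L1-penalized least squares problem together with its equivalent constrained form (the two are related through suitable pairs of $\lambda$ and $t$):

\[
\min_{w}\; \|Xw - y\|_2^2 + \lambda\,\|w\|_1
\qquad\text{and}\qquad
\min_{w}\; \|Xw - y\|_2^2 \;\;\text{s.t.}\;\; \|w\|_1 \le t .
\]

As a concrete illustration of the unconstrained formulation, the sketch below implements iterative soft-thresholding (ISTA). This is a minimal generic solver for the objective, not necessarily one of the eight methods analyzed in the report:

import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimal ISTA sketch for min_w ||Xw - y||_2^2 + lam * ||w||_1."""
    L = 2.0 * np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ w - y)        # gradient of the smooth term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Tiny usage example: a sparse ground truth is recovered up to shrinkage.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.01 * rng.standard_normal(50)
print(np.round(ista(X, y, lam=1.0), 2))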


Similar Articles

Large-scale Inversion of Magnetic Data Using Golub-Kahan Bidiagonalization with Truncated Generalized Cross Validation for Regularization Parameter Estimation

In this paper a fast method for large-scale sparse inversion of magnetic data is considered. The L1-norm stabilizer is used to generate models with sharp and distinct interfaces. To deal with the non-linearity introduced by the L1-norm, a model-space iteratively reweighted least squares algorithm is used. The original model matrix is factorized using the Golub-Kahan bidiagonalization that proje...
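The model-space IRLS idea can be sketched in a few lines (a minimal illustration of the reweighting step only; the Golub-Kahan projection and the GCV-based parameter selection used in the paper are omitted, and the symbols A, b, m are generic placeholders, not the paper's):

import numpy as np

def irls_l1(A, b, lam, n_iter=30, eps=1e-6):
    """Model-space IRLS sketch for min_m ||A m - b||_2^2 + lam * ||m||_1.

    Each iteration replaces ||m||_1 by the weighted quadratic
    sum_i m_i^2 / (|m_i| + eps), using the previous iterate's weights,
    and solves the resulting regularized normal equations.
    """
    m = np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares starting point
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(m) + eps))    # IRLS weights from current model
        m = np.linalg.solve(A.T @ A + lam * W, A.T @ b)
    return m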


Self-concordant analysis for logistic regression

Most of the non-asymptotic theoretical work in regression is carried out for the square loss, where estimators can be obtained through closed-form expressions. In this paper, we use and extend tools from the convex optimization literature, namely self-concordant functions, to provide simple extensions of theoretical results for the square loss to the logistic loss. We apply the extension techni...
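For orientation, the two losses involved are, using standard definitions with labels $y_i \in \{-1, +1\}$ rather than anything quoted from the paper:

\[
\ell_{\text{sq}}(w) = \sum_i \big(y_i - x_i^\top w\big)^2,
\qquad
\ell_{\text{log}}(w) = \sum_i \log\!\big(1 + \exp(-y_i\, x_i^\top w)\big).
\]

The square loss has a closed-form minimizer; the logistic loss does not, which is the gap the self-concordance machinery is used to bridge.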


A Primal-Dual Interior-Point Framework for Using the L1-Norm or the L2-Norm on the Data and Regularization Terms of Inverse Problems

Maximum A Posteriori (MAP) estimates in inverse problems are often based on quadratic formulations, corresponding to a Least Squares fitting of the data and to the use of the L2 norm on the regularization term. While the implementation of this estimation is straightforward and usually based on the Gauss Newton method, resulting estimates are sensitive to outliers, and spatial distributions of t...
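Schematically, and in our notation rather than the paper's, the framework covers the four objectives

\[
\min_x \; \|Ax - b\|_p^p + \lambda\,\|Lx\|_q^q, \qquad p, q \in \{1, 2\},
\]

where $p = q = 2$ is the quadratic MAP case described above, while choosing $p = 1$ (outlier-robust data fit) or $q = 1$ (edge-preserving regularization) yields the non-smooth variants that motivate the primal-dual interior-point treatment.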


Efficient Minimization Methods of Mixed l2-l1 and l1-l1 Norms for Image Restoration

Image restoration problems are often solved by finding the minimizer of a suitable objective function. Usually this function consists of a data-fitting term and a regularization term. For the least squares solution, both the data-fitting and the regularization terms are in the L2 norm. In this paper, we consider the least absolute deviation (LAD) solution and the least mixed norm (LMN) solution...
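In generic notation (A the system matrix, L a regularization operator; a sketch on our part, since the truncated abstract does not spell the objectives out, and the assignment of norms in the mixed case follows one plausible reading of the title), the three solutions contrast as:

\[
\text{LS:}\; \min_x \|Ax-b\|_2^2 + \lambda\|Lx\|_2^2,
\qquad
\text{LAD } (\ell_1\text{-}\ell_1)\text{:}\; \min_x \|Ax-b\|_1 + \lambda\|Lx\|_1,
\qquad
\text{LMN (mixed):}\; \min_x \|Ax-b\|_1 + \lambda\|Lx\|_2^2 .
\]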


Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares

BACKGROUND: In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it provides the possibility of high-quality recovery from sparse sampling data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimiz...
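A generic form of the dictionary-learning-regularized reconstruction objective being discussed (our notation, not the paper's: A the projection matrix, R_i a patch extractor, D the learned dictionary, and alpha_i the sparse codes; the paper's exact formulation lies behind the truncation) is

\[
\min_{x,\,\alpha} \; \|Ax - y\|_2^2 \;+\; \lambda \sum_i \Big( \|R_i x - D\alpha_i\|_2^2 + \nu\,\|\alpha_i\|_1 \Big),
\]

with the title indicating that the L1 term on the sparse codes is handled by iteratively reweighted least squares rather than, for example, soft-thresholding.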



Publication year: 2005